Multi-kernel Passive Stochastic Gradient Algorithms and Transfer Learning


Abstract

This paper develops a novel passive stochastic gradient algorithm. In passive stochastic approximation, the algorithm does not have control over the locations where noisy gradients of the cost function are evaluated. Classical passive stochastic gradient algorithms use a kernel that approximates a Dirac delta to weight the gradients based on how far they are evaluated from the desired point. In this paper we construct a multi-kernel passive stochastic gradient algorithm. The algorithm performs substantially better in high-dimensional problems and incorporates variance reduction. We analyze the weak convergence of the multi-kernel algorithm and its rate of convergence. In numerical examples, we study a multi-kernel version of the passive least mean squares (LMS) algorithm for transfer learning and compare its performance with the classical passive version.
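To make the passive setting concrete, the following is a minimal sketch of the classical single-kernel scheme the abstract contrasts against: gradients arrive at exogenous evaluation points, and a Gaussian kernel (approximating a Dirac delta) down-weights gradients evaluated far from the current iterate. The toy quadratic cost, step size, bandwidth, and sampling distribution are all illustrative assumptions, not details from the paper.

```python
import numpy as np

# Sketch of a classical (single-kernel) passive stochastic gradient algorithm
# on a toy quadratic cost C(theta) = 0.5 * ||theta - theta_star||^2.
# All numeric choices below are illustrative assumptions.
rng = np.random.default_rng(0)

d = 2
theta_star = np.array([1.0, -2.0])   # unknown minimizer (hypothetical)
theta = np.zeros(d)                  # algorithm's estimate
step, bandwidth = 0.3, 0.5

for k in range(40000):
    # Passive setting: the algorithm does NOT choose the evaluation point;
    # x arrives exogenously from some external process.
    x = theta_star + rng.normal(scale=2.0, size=d)
    # Noisy gradient of the cost, evaluated at x (not at theta).
    g = (x - theta_star) + rng.normal(scale=0.1, size=d)
    # Gaussian kernel approximating a Dirac delta: weight the gradient by
    # how close the evaluation point x is to the current iterate theta.
    w = np.exp(-np.sum((x - theta) ** 2) / (2 * bandwidth ** 2))
    theta = theta - step * w * g

print(np.round(theta, 2))
```

In high dimensions the kernel weight is almost always near zero (few exogenous points land close to the iterate), which is the inefficiency the paper's multi-kernel construction is designed to mitigate.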


Similar Articles

Application of kernel-based stochastic gradient algorithms to option pricing

We present an algorithm for American option pricing based on stochastic approximation techniques. Option pricing algorithms generally involve some sort of discretization, either on the state space or on the underlying functional space. Our work, which is an application of a more general perturbed gradient algorithm introduced recently by the authors, consists in approximating the value function...


Stochastic Proximal Gradient Algorithms for Multi-Source Quantitative Photoacoustic Tomography

The development of accurate and efficient image reconstruction algorithms is a central aspect of quantitative photoacoustic tomography (QPAT). In this paper, we address this issue for multi-source QPAT using the radiative transfer equation (RTE) as an accurate model for light transport. The tissue parameters are jointly reconstructed from the acoustical data measured for each of the applied sourc...


Adaptive natural gradient learning algorithms for various stochastic models

The natural gradient method has an ideal dynamic behavior that resolves the slow learning speed of the standard gradient descent method caused by plateaus. However, it requires calculating the Fisher information matrix and its inverse, which makes the implementation of the natural gradient almost impossible. To solve this problem, a preliminary study has been proposed concerning an adaptiv...


On Stochastic Proximal Gradient Algorithms

We study a perturbed version of the proximal gradient algorithm for which the gradient is not known in closed form and should be approximated. We address the convergence and derive a non-asymptotic bound on the convergence rate for the perturbed proximal gradient, a perturbed averaged version of the proximal gradient algorithm, and a perturbed version of the fast iterative shrinkage-thresholding ...


Generalized Stochastic Gradient Learning

We study the properties of the generalized stochastic gradient (GSG) learning in forward-looking models. GSG algorithms are a natural and convenient way to model learning when agents allow for parameter drift or robustness to parameter uncertainty in their beliefs. The conditions for convergence of GSG learning to a rational expectations equilibrium are distinct from but related to the well-kno...



Journal

Journal title: IEEE Transactions on Automatic Control

Year: 2021

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2021.3079280